
    Tuning the Diversity of Open-Ended Responses from the Crowd

    Crowdsourcing can solve problems that current fully automated systems cannot. Its effectiveness depends on the reliability, accuracy, and speed of the crowd workers that drive it. These objectives are frequently at odds with one another. For instance, how much time should workers be given to discover and propose new solutions versus deliberate over those currently proposed? How do we determine whether discovering a new answer is appropriate at all? And how do we manage workers who lack the expertise or attention needed to provide useful input to a given task? We present a mechanism that uses distinct payoffs for three possible worker actions (propose, vote, or abstain) to give workers the incentives needed to guarantee an effective (or even optimal) balance between searching for new answers, assessing those currently available, and, when they have insufficient expertise or insight for the task at hand, abstaining. We provide a novel game-theoretic analysis of this mechanism, test it experimentally on an image-labeling problem, and show that it allows a system to reliably control the balance between discovering new answers and converging to existing ones.
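    The abstract does not include the payoff values themselves; as a rough illustration, the decision a worker faces under such a mechanism can be sketched as picking the action with the highest expected payoff. The reward values and confidence model below are assumptions for illustration, not the paper's parameters.

```python
# Hypothetical sketch of a propose/vote/abstain decision under distinct payoffs.
# All payoff values and the confidence model are illustrative assumptions,
# not the mechanism's actual parameters.

def choose_action(p_correct: float,
                  reward_propose: float = 3.0,   # paid only if a proposed new answer is adopted
                  reward_vote: float = 1.0,      # paid if the vote matches the final answer
                  reward_abstain: float = 0.25,  # small guaranteed payment for abstaining
                  p_new_needed: float = 0.2):    # chance the task still lacks a good answer
    """Return the action ('propose', 'vote', or 'abstain') with the highest expected payoff."""
    expected = {
        "propose": p_new_needed * p_correct * reward_propose,
        "vote": p_correct * reward_vote,
        "abstain": reward_abstain,
    }
    return max(expected, key=expected.get)

print(choose_action(p_correct=0.9))  # a confident worker votes
print(choose_action(p_correct=0.1))  # a worker with little insight abstains
```

    Raising reward_propose relative to reward_vote shifts confident workers toward proposing new answers, which is the kind of balance such a mechanism is designed to control.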

    Crowdsourcing Accessibility: Human-Powered Access Technologies

    People with disabilities have always engaged the people around them in order to circumvent inaccessible situations, allowing them to live more independently and get things done in their everyday lives. Increasing connectivity is allowing this approach to be extended to wherever and whenever it is needed. Technology can leverage this human workforce to accomplish tasks beyond the capabilities of computers, increasing how accessible the world is for people with disabilities. This article outlines the growth of online human support, surveys a number of projects in this space, and presents a set of challenges and opportunities for this work going forward.

    Screen Correspondence: Mapping Interchangeable Elements between UIs

    Understanding user interface (UI) functionality is a useful yet challenging task for both machines and people. In this paper, we investigate a machine learning approach for screen correspondence, which allows reasoning about UIs by mapping their elements onto previously encountered examples with known functionality and properties. We describe and implement a model that incorporates element semantics, appearance, and text to support correspondence computation without requiring any labeled examples. Through a comprehensive performance evaluation, we show that our approach improves upon baselines by incorporating multi-modal properties of UIs. Finally, we show three example applications where screen correspondence facilitates better UI understanding for humans and machines: (i) instructional overlay generation, (ii) semantic UI element search, and (iii) automated interface testing.
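    The paper's model is not reproduced in the abstract; as a loose sketch of the general idea, elements from two screens can be embedded from a few of their properties and matched by similarity. The features, threshold, and greedy matching below are assumptions for illustration, not the authors' architecture.

```python
# Illustrative sketch (not the paper's model): embed each UI element from simple
# multi-modal properties and match elements across two screens by cosine similarity.
import numpy as np

def embed(element: dict) -> np.ndarray:
    """Toy embedding: one-hot element type plus a bag-of-characters of its text."""
    types = ["button", "text", "image", "input", "checkbox"]
    type_vec = np.array([1.0 if element["type"] == t else 0.0 for t in types])
    text_vec = np.zeros(26)
    for ch in element.get("text", "").lower():
        if ch.isalpha():
            text_vec[ord(ch) - ord("a")] += 1.0
    vec = np.concatenate([type_vec, text_vec])
    norm = np.linalg.norm(vec)
    return vec / norm if norm > 0 else vec

def correspond(screen_a: list, screen_b: list, threshold: float = 0.5) -> list:
    """Greedy one-to-one matching of elements across two screens by similarity."""
    sims = np.array([[embed(a) @ embed(b) for b in screen_b] for a in screen_a])
    matches, used_a, used_b = [], set(), set()
    # Visit candidate pairs from most to least similar, keeping each element at most once.
    for flat in np.argsort(-sims, axis=None):
        i, j = np.unravel_index(flat, sims.shape)
        if i in used_a or j in used_b or sims[i, j] < threshold:
            continue
        matches.append((screen_a[i]["id"], screen_b[j]["id"]))
        used_a.add(i)
        used_b.add(j)
    return matches

login_a = [{"id": "a1", "type": "button", "text": "Sign in"},
           {"id": "a2", "type": "input", "text": "Email"}]
login_b = [{"id": "b1", "type": "input", "text": "Email address"},
           {"id": "b2", "type": "button", "text": "Sign in"}]
print(correspond(login_a, login_b))  # expected: [('a1', 'b2'), ('a2', 'b1')]
```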

    The Effects of Sequence and Delay on Crowd Work

    A common approach in crowdsourcing is to break large tasks into small microtasks so that they can be parallelized across many crowd workers and so that redundant work can be more easily compared for quality control. In practice, this can result in the microtasks being presented out of their natural order and often introduces delays between individual microtasks. In this paper, we demonstrate in a study of 338 crowd workers that non-sequential microtasks and the introduction of delays significantly decrease worker performance. We show that interruptions where a large delay occurs between two related tasks can cause up to a 102% slowdown in completion time, and interruptions where workers are asked to perform different tasks in sequence can slow down completion time by 57%. We conclude with a set of design guidelines to improve both worker performance and realized pay, and instructions for implementing these changes in existing interfaces for crowd work.
    Keywords: crowdsourcing; human computation; workflows; continuity

    Introducing people with ASD to crowd work


    Never-ending Learning of User Interfaces

    Machine learning models have been trained to predict semantic information about user interfaces (UIs) to make apps more accessible, easier to test, and to automate. Currently, most models rely on datasets that are collected and labeled by human crowd workers, a process that is costly and surprisingly error-prone for certain tasks. For example, it is possible to guess whether a UI element is "tappable" from a screenshot (i.e., based on visual signifiers) or from potentially unreliable metadata (e.g., a view hierarchy), but one way to know for certain is to programmatically tap the UI element and observe the effects. We built the Never-ending UI Learner, an app crawler that automatically installs real apps from a mobile app store and crawls them to discover new and challenging training examples to learn from. The Never-ending UI Learner has crawled for more than 5,000 device-hours, performing over half a million actions on 6,000 apps to train three computer vision models for (i) tappability prediction, (ii) draggability prediction, and (iii) screen similarity.
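    The crawler's implementation is not part of the abstract; a minimal sketch of the interaction-based labeling idea might look like the loop below. The `device` object (screenshot, tap, diff, back navigation) and `detect_elements` are hypothetical stand-ins for a real UI-automation driver, not the authors' system.

```python
# Hypothetical sketch of labeling tappability by interaction rather than human annotation:
# tap each detected element and record whether the screen visibly changed.
# `device` and `detect_elements` are assumed stand-ins, not the paper's implementation.

def label_tappability(device, detect_elements):
    """Collect (screenshot, element, is_tappable) training examples from one screen."""
    examples = []
    before = device.screenshot()
    for element in detect_elements(before):
        device.tap(element.center)
        after = device.screenshot()
        changed = device.screens_differ(before, after)  # e.g. pixel or view-hierarchy diff
        examples.append((before, element, changed))     # a changed screen suggests "tappable"
        if changed:
            device.go_back()              # return to the original screen before the next tap
            before = device.screenshot()
    return examples
```

    Running such a loop across many apps, as the crawl described above does at scale, yields labels grounded in observed behavior instead of annotator guesses.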